8 research outputs found

    Adaptable volumetric liver segmentation model for CT images using region-based features and convolutional neural network

    The liver plays an important role in metabolic processes, so fast diagnosis and, where necessary, surgical planning are essential in case of disease. Automatic liver segmentation has been studied for many years and various segmentation techniques have been proposed, but the task remains challenging and further improvements in segmentation accuracy are still required. In this work, an automatic, deep-learning-based approach is introduced that is adaptable and able to handle smaller databases, including heterogeneous data. The method starts with a preprocessing step that highlights the liver area using probability-density-function-based estimation and supervoxel segmentation. Then a modification of the 3D U-Net, called 3D RP-UNet, is introduced, which applies the ResPath in the 3D network. Finally, liver-heart separation and morphological steps further refine the segmentation. Results on three public databases showed that the proposed method performs robustly and achieves good segmentation performance compared to other state-of-the-art approaches on the majority of the evaluation metrics.
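    The probability-density-function-based highlighting described above can be sketched roughly as follows. This is a minimal illustration, assuming the liver corresponds to a dominant soft-tissue intensity peak; the function name, intensity range, and window width are hypothetical and not taken from the paper:

    ```python
    import numpy as np

    def highlight_liver_candidates(volume, window=60.0):
        """Rough liver highlighting (hypothetical sketch): estimate the
        intensity PDF with a histogram, take the dominant soft-tissue
        peak as the liver mode, and keep voxels near that mode."""
        # Restrict to a plausible soft-tissue HU range before estimating the PDF.
        soft = volume[(volume > 0) & (volume < 200)]
        hist, edges = np.histogram(soft, bins=100, density=True)
        centers = (edges[:-1] + edges[1:]) / 2
        mode = centers[np.argmax(hist)]          # peak of the estimated PDF
        return np.abs(volume - mode) <= window   # boolean candidate mask

    # Toy volume: air background at -1000 HU, a "liver" block around 90 HU.
    vol = np.full((32, 32, 32), -1000.0)
    vol[8:24, 8:24, 8:24] = 90.0 + np.random.randn(16, 16, 16) * 5.0
    mask = highlight_liver_candidates(vol)
    ```

    In the actual pipeline such a candidate mask would then be refined by the supervoxel step before being passed to the network.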

    Automatic liver segmentation on CT images combining region-based techniques and convolutional features

    Precise automatic liver segmentation plays an important role in computer-aided diagnosis of liver pathology. Despite many years of research, this is still a challenging task, especially when processing heterogeneous volumetric data from different sources. This study focuses on automatic liver segmentation of CT volumes and proposes a fusion of traditional methods and neural-network prediction masks. First, a region-growing-based method is proposed, which also applies active contours and probability-density-function-based thresholding. The obtained binary mask is then combined with the result of a 3D U-Net improved by a GrowCut approach. Extensive quantitative evaluation is carried out on three CT datasets with varying image characteristics. The proposed fusion method compensates for the drawbacks of the traditional and U-Net-based approaches, performs uniformly and stably on heterogeneous CT data, and is comparable to the state of the art, so it provides a promising segmentation alternative.
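    A voxelwise fusion of a traditional binary mask with a CNN prediction could, for example, look like the sketch below. The exact fusion rule, function name, and confidence threshold are assumptions for illustration, not the paper's published formula:

    ```python
    import numpy as np

    def fuse_masks(traditional, cnn, cnn_prob=None, high_conf=0.9):
        """Hypothetical fusion rule: keep voxels where both methods agree,
        and additionally accept high-confidence CNN voxels to recover
        regions the region-growing step may have missed."""
        fused = traditional & cnn
        if cnn_prob is not None:
            fused |= cnn_prob >= high_conf
        return fused

    trad = np.array([[1, 1, 0, 0]], dtype=bool)
    cnn  = np.array([[1, 0, 1, 0]], dtype=bool)
    prob = np.array([[0.99, 0.40, 0.95, 0.10]])
    fused = fuse_masks(trad, cnn, prob)
    # Voxel 0 is kept by agreement; voxel 2 by high CNN confidence.
    ```

    The appeal of such a rule is that each source vetoes the other's isolated false positives while high-confidence CNN output still fills in under-grown regions.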

    Comprehensive deep learning-based framework for automatic organs-at-risk segmentation in head-and-neck and pelvis for MR-guided radiation therapy planning

    Introduction: The excellent soft-tissue contrast of magnetic resonance imaging (MRI) is appealing for the delineation of organs-at-risk (OARs) required for radiation therapy planning (RTP). In the last decade there has been increasing interest in using deep-learning (DL) techniques to shorten the labor-intensive manual work and to increase reproducibility. This paper focuses on the automatic segmentation of 27 head-and-neck and 10 male-pelvis OARs with deep-learning methods based on T2-weighted MR images. Method: The proposed method uses 2D U-Nets for localization and a 3D U-Net for segmentation of the various structures. The models were trained using public and private datasets and evaluated on private datasets only. Results and discussion: Evaluation against ground-truth contours demonstrated that the proposed method can accurately segment the majority of OARs, with performance similar or superior to state-of-the-art models. Furthermore, the auto-contours were visually rated by clinicians using a Likert score, and on average 81% of them were found clinically acceptable.
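    Evaluation against ground-truth contours in this field typically relies on overlap metrics such as the Dice similarity coefficient; a minimal sketch of that standard metric (not code from the paper) is:

    ```python
    import numpy as np

    def dice(pred, gt, eps=1e-8):
        """Dice similarity coefficient between two binary masks:
        2*|A intersect B| / (|A| + |B|)."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum() + eps)

    a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 pixels
    b = np.zeros((10, 10), dtype=bool); b[4:8, 2:8] = True   # 24 pixels
    # intersection = 24, so Dice = 2*24 / (36 + 24) = 0.8
    ```

    Dice-style overlap scores are usually reported alongside qualitative clinician ratings such as the Likert scores mentioned above, since geometric overlap alone does not capture clinical acceptability.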

    Deep-learning-based segmentation of organs-at-risk in the head for MR-assisted radiation therapy planning

    Segmentation of organs-at-risk (OARs) in MR images has several clinical applications, including radiation therapy (RT) planning. This paper presents a deep-learning-based method to segment 15 structures in the head region. The proposed method first applies 2D U-Net models to each of the three planes (axial, coronal, sagittal) to roughly segment each structure. The results of the 2D models are then combined into a fused prediction to localize the 3D bounding box of the structure. Finally, a 3D U-Net is applied to the volume of the bounding box to determine the precise contour of the structure. The model was trained on a public dataset and evaluated on both public and private datasets containing T2-weighted MR scans of the head-and-neck region. In all cases the contour of each structure was defined by operators trained by expert clinical delineators. The evaluation demonstrated that various structures can be accurately and efficiently localized and segmented with the presented framework. The contours generated by the proposed method were also qualitatively evaluated: the majority (92%) of the segmented OARs were rated as clinically useful for radiation therapy.
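    The localization step, fusing per-plane 2D predictions into a 3D bounding box, might be sketched as below, assuming each plane's rough segmentation has already been stacked back into a full-resolution binary volume. The majority-vote fusion rule here is a simplification for illustration, not necessarily the authors' exact rule:

    ```python
    import numpy as np

    def fused_bounding_box(axial, coronal, sagittal, min_votes=2):
        """Majority vote across the three per-plane prediction volumes,
        then return the tight 3D bounding box of the fused foreground
        as a tuple of slices (or None if no voxel gets enough votes)."""
        votes = axial.astype(int) + coronal.astype(int) + sagittal.astype(int)
        fused = votes >= min_votes
        if not fused.any():
            return None
        idx = np.argwhere(fused)
        lo, hi = idx.min(axis=0), idx.max(axis=0) + 1  # half-open ranges
        return tuple(slice(l, h) for l, h in zip(lo, hi))

    # Toy example: all three plane predictions agree on a small cube.
    vol = np.zeros((16, 16, 16), dtype=bool)
    vol[4:8, 5:9, 6:10] = True
    box = fused_bounding_box(vol, vol, vol)
    # box == (slice(4, 8), slice(5, 9), slice(6, 10))
    ```

    The resulting slice tuple can be used directly to crop the MR volume before running the precise 3D U-Net on the much smaller sub-volume.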